

Trustworthy Monte Carlo

Neural Information Processing Systems

We present an orchestration of the computations such that the outcome is accompanied with a proof of correctness that can be verified with substantially less computational resources than it takes to run the computations from scratch with state-of-the-art algorithms. Specifically, we adopt an algebraic proof system developed in computational complexity theory, in which the proof is represented by a polynomial; evaluating the polynomial at a random point amounts to a verification of the proof with probabilistic guarantees.
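The random-evaluation check described above rests on the Schwartz-Zippel lemma: two distinct low-degree polynomials over a large field agree at a uniformly random point only with small probability. A minimal sketch, using a toy polynomial-product "computation" as the claim to be verified (the prime field and function names here are illustrative, not the paper's actual protocol):

```python
import random

P = 2**61 - 1  # large prime field; a false claim survives with prob. <= deg/P

def poly_eval(coeffs, x, p=P):
    # Horner evaluation of a polynomial (coefficients low to high) mod p.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def poly_mul(a, b, p=P):
    # The "heavy" computation: naive coefficient product of two polynomials mod p.
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] = (out[i + j] + ca * cb) % p
    return out

def verify_product(a, b, claimed, p=P):
    # Schwartz-Zippel check: accept iff the claimed product agrees with a*b
    # at one random point. Verification costs O(degree), not O(degree^2).
    r = random.randrange(p)
    return poly_eval(a, r, p) * poly_eval(b, r, p) % p == poly_eval(claimed, r, p)
```

The verifier's work is a handful of evaluations, substantially cheaper than redoing the multiplication, which is the asymmetry the abstract exploits.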



Learning from logical constraints with lower- and upper-bound arithmetic circuits

AIHub

In the road traffic example, the network predicts probabilities for each agent's identity, action and position. At inference, logical rules are evaluated using these predictions. The resulting satisfaction degree is then used to update the network so that future predictions better align with the knowledge constraints, as illustrated in Figure 2.
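One common way to turn a logical rule into a differentiable satisfaction degree is fuzzy relaxation of the connectives. A hypothetical sketch, assuming product t-norm semantics and invented probabilities for the road-traffic example (the paper's exact rule semantics may differ):

```python
# Soft (fuzzy) connectives over probabilities in [0, 1] -- product t-norm
# semantics chosen here as an illustrative assumption.
def soft_not(p): return 1.0 - p
def soft_and(p, q): return p * q
def soft_or(p, q): return p + q - p * q          # probabilistic sum
def soft_implies(p, q): return soft_or(soft_not(p), q)

# Hypothetical rule: "if the agent is a pedestrian, it is not on the driving lane".
p_pedestrian = 0.9   # network's predicted probability for the agent's identity
p_on_lane = 0.2      # network's predicted probability for the agent's position

satisfaction = soft_implies(p_pedestrian, soft_not(p_on_lane))
loss = 1.0 - satisfaction  # penalize predictions that violate the constraint
```

Because every connective is differentiable, `loss` can be added to the training objective and gradients flow back into the network, which is the update step the passage describes.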



Tractable Operations for Arithmetic Circuits of Probabilistic Models

Neural Information Processing Systems

We consider tractable representations of probability distributions and the polytime operations they support. In particular, we consider a recently proposed arithmetic circuit representation, the Probabilistic Sentential Decision Diagram (PSDD). We show that PSDDs support a polytime multiplication operator, while they do not support a polytime operator for summing out variables. A polytime multiplication operator makes PSDDs suitable for a broader class of applications than arithmetic circuits, which do not in general support multiplication. As one example, we show that PSDD multiplication leads to a very simple but effective compilation algorithm for probabilistic graphical models: represent each model factor as a PSDD, and then multiply them.
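The compilation recipe at the end of the abstract can be illustrated without the circuit machinery. A sketch over dense factor tables on binary variables, standing in for PSDDs (real PSDD multiplication operates on circuit structure and stays compact; a dense table is exponential in the number of variables, so this is purely didactic):

```python
from itertools import product

def multiply(f, g):
    # Pointwise product of two factors. Each factor is (variable_list, table),
    # where table maps an assignment tuple over those variables to a value.
    f_vars, f_tab = f
    g_vars, g_tab = g
    out_vars = f_vars + [v for v in g_vars if v not in f_vars]
    out_tab = {}
    for asg in product([0, 1], repeat=len(out_vars)):
        env = dict(zip(out_vars, asg))
        fv = f_tab[tuple(env[v] for v in f_vars)]
        gv = g_tab[tuple(env[v] for v in g_vars)]
        out_tab[asg] = fv * gv
    return out_vars, out_tab

def compile_model(factors):
    # "Represent each factor, then multiply them": a left fold of multiply
    # over all model factors, mirroring the PSDD compilation algorithm.
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    return result
```

The appeal of the PSDD version is exactly that the same two-line recipe applies, but with multiplication done in polynomial time on the circuit representation.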





JSTprove: Pioneering Verifiable AI for a Trustless Future

Gold, Jonathan, Freiberg, Tristan, Isah, Haruna, Shahabi, Shirin

arXiv.org Artificial Intelligence

The integration of machine learning (ML) systems into critical industries such as healthcare, finance, and cybersecurity has transformed decision-making processes, but it also brings new challenges around trust, security, and accountability. As AI systems become more ubiquitous, ensuring the transparency and correctness of AI-driven decisions is crucial, especially when they have direct consequences on privacy, security, or fairness. Verifiable AI, powered by Zero-Knowledge Machine Learning (zkML), offers a robust solution to these challenges. zkML enables the verification of AI model inferences without exposing sensitive data, providing an essential layer of trust and privacy. However, traditional zkML systems typically require deep cryptographic expertise, placing them beyond the reach of most ML engineers. In this paper, we introduce JSTprove, a specialized zkML toolkit, built on Polyhedra Network's Expander backend, to enable AI developers and ML engineers to generate and verify proofs of AI inference. JSTprove provides an end-to-end verifiable AI inference pipeline that hides cryptographic complexity behind a simple command-line interface while exposing auditable artifacts for reproducibility. We present the design, innovations, and real-world use cases of JSTprove as well as our blueprints and tooling to encourage community review and extension. JSTprove therefore serves both as a usable zkML product for current engineering needs and as a reproducible foundation for future research and production deployments of verifiable AI.